Training of subspace distribution clustering hidden Markov model

Authors

  • Brian Kan-Wing Mak
  • Enrico Bocchieri
Abstract

In [2] and [7], we presented our novel subspace distribution clustering hidden Markov models (SDCHMMs), which can be converted from continuous density hidden Markov models (CDHMMs) by clustering subspace Gaussians in each stream over all models. Though such model conversion is simple and runs fast, it has two drawbacks: (1) it does not take advantage of the fewer model parameters in SDCHMMs (theoretically, SDCHMMs may be trained with a smaller amount of data); and (2) it involves two separate optimization steps (first training CDHMMs, then clustering subspace Gaussians), and the resulting SDCHMMs are not guaranteed to be optimal. In this paper, we show how SDCHMMs may be trained directly from less speech data if we have a priori knowledge of their architecture. On the ATIS task, a speaker-independent, context-independent (CI) 20-stream SDCHMM system trained with our novel SDCHMM reestimation algorithm on only 8 minutes of speech performs as well as a CDHMM system trained with the conventional CDHMM reestimation algorithm on 105 minutes of speech.
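The conversion mentioned above, clustering the subspace Gaussians of each stream over all models, can be sketched in a few lines. The Python fragment below is only an illustrative sketch, not the authors' implementation: the function name cluster_stream_gaussians, the use of scikit-learn's k-means, and the Euclidean distance on means and log-variances are assumptions; the distortion measure actually used for Gaussian clustering in the paper may differ.

```python
# Minimal sketch (assumed, not the paper's algorithm) of converting the
# subspace Gaussians of ONE stream into a small set of prototypes.
import numpy as np
from sklearn.cluster import KMeans

def cluster_stream_gaussians(means, variances, n_prototypes):
    """means, variances: arrays of shape (n_gaussians, stream_dim),
    gathered from every mixture component of every CDHMM state.
    Returns prototype parameters and the tying table that maps each
    original subspace Gaussian to its prototype."""
    # Represent each subspace Gaussian by its mean and log-variance so the
    # Euclidean k-means distance is at least scale-aware (an assumption).
    features = np.hstack([means, np.log(variances)])
    km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(features)
    labels = km.labels_
    # Each prototype is summarized here by the average of its members.
    proto_means = np.vstack([means[labels == k].mean(axis=0)
                             for k in range(n_prototypes)])
    proto_vars = np.vstack([variances[labels == k].mean(axis=0)
                            for k in range(n_prototypes)])
    return proto_means, proto_vars, labels
```

Running this once per stream yields one small codebook of subspace Gaussian prototypes per stream, plus a tying table that points every original mixture component at its prototype in each stream.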


Similar resources

Subspace Distribution Clustering HMM for Chinese Digit Speech Recognition

As a statistical method, the Hidden Markov Model (HMM) is widely used for speech recognition. In order to train the HMM effectively with much less data, the Subspace Distribution Clustering Hidden Markov Model (SDCHMM), derived from the Continuous Density Hidden Markov Model (CDHMM), is introduced. With parameter tying, a new method to train SDCHMMs is de...


Training of context-dependent subspace distribution clustering hidden Markov model

Training of continuous density hidden Markov models (CDHMMs) is usually time-consuming and tedious due to the large number of model parameters involved. Recently we proposed a new derivative of the CDHMM, the subspace distribution clustering hidden Markov model (SDCHMM), which ties CDHMMs at the finer level of subspace distributions, resulting in many fewer model parameters. An SDCHMM training algorith...


Direct training of subspace distribution clustering hidden Markov model

It generally takes a long time and requires a large amount of speech data to train hidden Markov models for a speech recognition task of a reasonably large vocabulary. Recently, we proposed a compact acoustic model called “subspace distribution clustering hidden Markov model” (SDCHMM) with an aim to save some of the training effort. SDCHMMs are derived from tying continuous density hidden Marko...


Microsoft Word - Hybridmodel2.dot

Today’s state-of-the-art speech recognition systems typically use continuous density hidden Markov models with mixtures of Gaussian distributions. Such speech recognition systems have problems: they require too much memory to run, and they are too slow for large-vocabulary applications. Two approaches are proposed for the design of compact acoustic models, namely, subspace distribution clustering hid...


An Acoustic-Phonetic and a Model-Theoretic Analysis of Subspace Distribution Clustering Hidden Markov Models

Recently, we proposed a new derivative of conventional continuous density hidden Markov modeling (CDHMM) that we call “subspace distribution clustering hidden Markov modeling” (SDCHMM). SDCHMMs can be created by tying low-dimensional subspace Gaussians in CDHMMs. In the tasks we tried, usually only 32–256 subspace Gaussian prototypes were needed in an SDCHMM-based system to maintain recognit...
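To make the tying concrete, here is a small hypothetical Python sketch (consistent with the sketch above, and again not code from the paper) of how an SDCHMM state likelihood can be evaluated once the per-stream prototypes and the tying table exist: the prototype likelihoods of each stream are computed once per frame and reused by every mixture component tied to them, which is why a few hundred prototypes per stream can stand in for thousands of full-space Gaussians.

```python
# Hypothetical sketch of SDCHMM state-likelihood evaluation with tied
# subspace Gaussian prototypes; names and data layout are assumptions.
import numpy as np

def log_gauss_diag(x, mean, var):
    # log N(x; mean, diag(var)) for one low-dimensional stream sub-vector
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def sdchmm_state_loglik(obs_streams, mix_weights, tying, proto_means, proto_vars):
    """obs_streams[k]      : observation sub-vector of stream k
       mix_weights[m]      : weight of mixture component m of this state
       tying[m][k]         : prototype index used by component m in stream k
       proto_means/vars[k] : prototype parameters of stream k"""
    n_streams = len(obs_streams)
    # Cache the per-stream log-likelihood of every prototype for this frame;
    # all states and mixture components share this cache.
    cache = [np.array([log_gauss_diag(obs_streams[k], proto_means[k][p], proto_vars[k][p])
                       for p in range(len(proto_means[k]))])
             for k in range(n_streams)]
    # Each full-space Gaussian is the product of its tied subspace prototypes,
    # i.e. a sum of cached per-stream log-likelihoods.
    comp_logliks = [np.log(mix_weights[m]) +
                    sum(cache[k][tying[m][k]] for k in range(n_streams))
                    for m in range(len(mix_weights))]
    return np.logaddexp.reduce(comp_logliks)
```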



Journal title:

Volume   Issue

Pages  -

Publication date: 1998